Empirical risk minimization for heavy-tailed losses
Authors
Christian Brownlees, Emilien Joly, Gábor Lugosi
Abstract
The purpose of this paper is to discuss empirical risk minimization when the losses are not necessarily bounded and may have a distribution with heavy tails. In such situations, the usual empirical averages may fail to provide reliable estimates, and empirical risk minimization may incur a large excess risk. However, some robust mean estimators proposed in the literature may be used to replace empiri...
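The idea is to keep the ERM template but swap the ordinary sample mean of the losses for a robust mean estimate. As a rough illustration only (not the paper's exact estimator; the function names, the median-of-means choice, and the n_blocks parameter are assumptions made for this sketch), the following Python snippet picks, among a finite set of candidate models, the one whose median-of-means risk estimate is smallest:

    import numpy as np

    def median_of_means(values, n_blocks=5, rng=None):
        # Partition the sample into blocks, average each block, and return the
        # median of the block means; this is far less sensitive to heavy tails
        # than the plain empirical average.
        rng = np.random.default_rng(rng)
        values = np.asarray(values, dtype=float)
        blocks = np.array_split(values[rng.permutation(len(values))], n_blocks)
        return float(np.median([b.mean() for b in blocks]))

    def robust_erm(losses_by_model, n_blocks=5):
        # losses_by_model: {model_name: array of per-sample losses}.
        # Select the model minimizing the robust risk estimate rather than
        # the ordinary empirical risk.
        risks = {m: median_of_means(l, n_blocks) for m, l in losses_by_model.items()}
        return min(risks, key=risks.get)

For light-tailed losses the two selection rules typically agree; the difference shows up when a few extreme losses would otherwise dominate the empirical average.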
Similar resources
Fast learning rates with heavy-tailed losses
We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails. To enable such analyses, we introduce two new conditions: (i) the envelope function sup_{f∈F} |ℓ ∘ f|, where ℓ is the loss function and F is the hypothesis class, exists and is L-integrable, and (ii) ℓ satisfies the multi-scale Bernstein's condition on F. Under these assumptions...
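For orientation, the two conditions can be written out; the display below is a hedged restatement, with the single-scale Bernstein condition shown in place of the paper's multi-scale version, whose scales and constants are not recoverable from this excerpt (B > 0 and β ∈ (0, 1] are generic constants, and f* denotes the risk minimizer in F):

    \[
      \sup_{f \in \mathcal{F}} \lvert \ell \circ f \rvert \ \text{is integrable, and} \quad
      \mathbb{E}\bigl[(\ell \circ f - \ell \circ f^{*})^{2}\bigr]
      \;\le\; B \,\Bigl(\mathbb{E}\bigl[\ell \circ f - \ell \circ f^{*}\bigr]\Bigr)^{\beta}
      \quad \text{for all } f \in \mathcal{F}.
    \]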
Chaining Bounds for Empirical Risk Minimization
This paper extends the standard chaining technique to prove excess risk upper bounds for empirical risk minimization in random-design settings, even when the magnitude of the noise and of the estimates is unbounded. The bound applies to many loss functions besides the squared loss, and scales only with the sub-Gaussian or subexponential parameters, without further statistical assumptions such as the...
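As background for what chaining buys, the classical prototype is Dudley's entropy-integral bound for a process with sub-Gaussian increments; the display below is that generic bound, shown only for orientation and not the refined result of the paper (f_0 is a fixed reference element, N(F, d, ε) is the ε-covering number of F under the metric d controlling the increments, and C is a universal constant):

    \[
      \mathbb{E}\,\sup_{f \in \mathcal{F}} \bigl( X_{f} - X_{f_0} \bigr)
      \;\le\; C \int_{0}^{\infty} \sqrt{\log N(\mathcal{F}, d, \varepsilon)}\; d\varepsilon .
    \]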
Empirical risk minimization for support vector classifiers
In this paper, we propose a general technique for solving support vector classifiers (SVCs) with an arbitrary loss function, relying on the application of an iterative reweighted least squares (IRWLS) procedure. We further show that three properties of the SVC solution can be written as conditions on the loss function. This technique allows the implementation of the empirical risk minimization...
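To make the IRWLS idea concrete, here is a minimal Python sketch for one specific loss, the squared hinge; it is an illustrative assumption rather than the paper's procedure, whose weighting scheme covers more general losses, and for simplicity the bias term is folded into the weight vector and regularized along with it:

    import numpy as np

    def irwls_svc(X, y, C=1.0, n_iter=50, tol=1e-6):
        # Linear classifier fit by iterative reweighted least squares for the
        # squared hinge loss: 0.5*||w||^2 + C * sum(max(0, 1 - y_i*f(x_i))^2).
        # Labels y must take values in {-1, +1}.
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])    # append bias column
        theta = np.zeros(Xa.shape[1])
        for _ in range(n_iter):
            margin_slack = 1.0 - y * (Xa @ theta)        # e_i = 1 - y_i f(x_i)
            a = np.where(margin_slack > 0, 2.0 * C, 0.0) # per-sample weights
            # Weighted, regularized least-squares step:
            # (I + sum_i a_i x_i x_i^T) theta = sum_i a_i y_i x_i
            weighted_X = a[:, None] * Xa
            H = np.eye(Xa.shape[1]) + Xa.T @ weighted_X
            theta_new = np.linalg.solve(H, weighted_X.T @ y)
            if np.linalg.norm(theta_new - theta) < tol:
                theta = theta_new
                break
            theta = theta_new
        return theta[:-1], theta[-1]                     # (weights, bias)

Each pass zeroes out the weight of points already classified with margin at least one, so the reweighting is what turns an ordinary least-squares solve into a large-margin fit.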
mixup: Beyond Empirical Risk Minimization
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple lin...
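Since the construction is only a few lines, a hedged numpy sketch of the core mixup step follows (the function name, the alpha default, and the single per-batch mixing coefficient are assumptions of this sketch; the paper pairs examples within a minibatch in the same spirit):

    import numpy as np

    def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
        # Blend each example and its one-hot label with a randomly paired
        # example from the same batch, using a Beta(alpha, alpha) weight.
        rng = np.random.default_rng(rng)
        lam = rng.beta(alpha, alpha)               # mixing coefficient
        perm = rng.permutation(len(x))             # random pairing
        x_mix = lam * x + (1.0 - lam) * x[perm]
        y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
        return x_mix, y_mix

Training then proceeds exactly as in ordinary empirical risk minimization, but on the mixed pairs (x_mix, y_mix) instead of the raw examples.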
Journal
Journal title: The Annals of Statistics
Year: 2015
ISSN: 0090-5364
DOI: 10.1214/15-aos1350